In statistics, consistency of procedures such as confidence intervals or hypothesis tests involves their behaviour as the number of items in the data-set to which they are applied increases indefinitely. In particular, consistency requires that the outcome of the procedure should identify the underlying truth.[1] Use of the term in statistics derives from Sir Ronald Fisher in 1922.[2]
Use of the terms consistency and consistent in statistics is restricted to cases where essentially the same procedure can be applied to any number of data items. In complicated applications of statistics, there may be several ways in which the number of data items may grow. For example, records for rainfall within an area might increase in three ways: records for additional time periods; records for additional sites within a fixed area; records for extra sites obtained by extending the size of the area. In such cases, the property of consistency may be limited to one or more of the possible ways a sample size can grow.
A consistent estimator is one which, considered as a random variable indexed by the number of items in the data-set, converges (typically in probability) to the value that the estimation procedure is designed to estimate.
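This convergence can be illustrated with a small simulation: the sample mean is a consistent estimator of the mean of the distribution from which the data are drawn, so its value concentrates around the true mean as the sample grows. The parameter values and seed below are hypothetical choices for illustration.

```python
import random

def sample_mean(n, mu=5.0, sigma=2.0, seed=0):
    # Draw n i.i.d. Gaussian observations with (hypothetical) true mean mu
    # and standard deviation sigma, and return the sample mean, which
    # estimates mu.
    rng = random.Random(seed)
    xs = [rng.gauss(mu, sigma) for _ in range(n)]
    return sum(xs) / n

# As n grows, the estimate concentrates around the true mean mu = 5:
# the standard deviation of the sample mean is sigma / sqrt(n).
for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```

With the largest sample the estimate differs from 5 by only a few thousandths, while the smallest sample can miss by a substantial fraction of a unit.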
An estimator that has Fisher consistency is one for which, if the estimator were calculated using the entire population rather than a sample, the true value of the estimated parameter would be obtained.
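Fisher consistency can be checked directly for a finite population: applying the estimation procedure to every member of the population should return the true parameter exactly. A minimal sketch, using a hypothetical four-element population and the sample-mean procedure as the estimator:

```python
# Hypothetical finite population; its true mean is (2 + 4 + 6 + 8) / 4 = 5.
population = [2.0, 4.0, 6.0, 8.0]
true_mean = sum(population) / len(population)

def mean_estimator(data):
    # The "sample mean" procedure, here applied to an arbitrary data set.
    return sum(data) / len(data)

# Fisher consistency: the procedure applied to the whole population
# recovers the true parameter value exactly, not just in the limit.
assert mean_estimator(population) == true_mean
```

The contrast with ordinary consistency is that no limit is involved: the property concerns what the procedure returns when given the entire population, rather than its behaviour as a sample grows.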
A consistent test is one for which the power of the test against a fixed false null hypothesis increases to one as the number of data items increases.[1]